    The Importance of Modularity in Bioinformatics Tools

    In the last decade the number of bioinformatics tools has increased enormously. There are tools to store, analyze, visualize, edit or generate biological data, and more are still in development. Yet the demand for increased functionality in a single piece of software must be balanced against the need for modularity to keep the software maintainable. In complex systems, the conflicting demands of features and maintainability are often reconciled by plug-in systems.

For example, Cytoscape, an open-source platform for complex-network analysis and visualization, uses a plug-in system to allow the application to be extended without changing the core. This not only allows new functionality to be integrated without a new release, but also lets other developers contribute plug-ins that are needed in their research.

Most tools have their own individual plug-in systems, tailored to the needs of the application. These are often very simple and easy to use. However, the increasing complexity of plug-ins demands more functionality from the plug-in system: we want to reuse components in different contexts, we want simple plug-in interfaces, and we want to allow communication and dependencies between plug-ins. Many tools implemented in Java face these problems, and there seems to be a common solution: the integration of an established modularity framework, like OSGi. To our knowledge, a number of developers of bioinformatics tools are already implementing, planning or considering the integration of OSGi into their applications, e.g. Cytoscape, Protege, PathVisio, ImageJ, Jalview and Chipster. The adoption of modularity frameworks in the development of bioinformatics applications is steadily increasing and should be considered in the design of new software.

By modularity in the traditional computer science sense, we mean the division of a software application into logical parts with separate concerns. To ease development, the application is separated into smaller logical parts, which are implemented individually. A set of modules can form a larger application, but only if a proper glue is used; OSGi is an example of such a glue. OSGi allows developers to build an infrastructure into an application for adding and using different modules. It provides mechanisms that let the individual modules rely on and interact with each other, opening the possibility to combine different modules to solve the problem at hand. Later, modules can be removed and new ones added to tackle another problem. As Katy Boerner writes in her article 'Plug-and-Play Macroscopes', we should 'implement software frameworks that empower domain scientists to assemble their own continuously evolving macroscopes, adding and upgrading existing (and removing obsolete) plug-ins to arrive at a set that is truly relevant for their work'.
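
To make this concrete, here is a minimal sketch of how a module can expose its functionality through OSGi. BundleActivator, BundleContext and registerService are part of the standard OSGi framework API; the LayoutAlgorithm interface and ForceDirectedLayout class are hypothetical stand-ins for a real plug-in's functionality.

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;
    import org.osgi.framework.ServiceRegistration;

    // Hypothetical service interface that a layout module might publish.
    interface LayoutAlgorithm {
        void layout(Object network);
    }

    // Hypothetical implementation provided by this module.
    class ForceDirectedLayout implements LayoutAlgorithm {
        public void layout(Object network) { /* compute node positions */ }
    }

    // Every OSGi module (bundle) can declare an activator; the framework
    // calls start/stop when the module is added to or removed from the
    // running application, without restarting it.
    public class LayoutActivator implements BundleActivator {
        private ServiceRegistration<LayoutAlgorithm> registration;

        public void start(BundleContext context) {
            // Publish this module's functionality as a service; other
            // modules can look it up by interface, with no compile-time
            // dependency on this module.
            registration = context.registerService(
                    LayoutAlgorithm.class, new ForceDirectedLayout(), null);
        }

        public void stop(BundleContext context) {
            // Withdraw the service cleanly when the module is removed.
            registration.unregister();
        }
    }

A consuming module can then obtain the service via context.getServiceReference(LayoutAlgorithm.class) and use it without ever depending on the providing module directly, which is what makes swapping or upgrading modules at runtime possible.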

Some of these modules will be specific to one application, but many can be reused by other tools. We are talking about general features such as the import and export of different file formats, a layout algorithm that could be used by several visualization tools, or a lookup in an external online database. Why should every tool implement its own parser or algorithm? Modularity helps to share functionality. There is no need to start from scratch and implement everything anew, so developers can focus on new and important features.

Adding modularity, or better, a modularity framework, to an existing software application is not a trivial task. The developers of Cytoscape are currently undertaking this challenge with the upcoming version 3. We are also working on the integration of OSGi into our pathway visualization tool PathVisio, and we now want to share and compare our experiences so that others can benefit from our discoveries. This will help them not only in deciding whether OSGi is a suitable solution for them, but also in the integration process itself.

    WikiPathways: building research communities on biological pathways.

    Here, we describe the development of WikiPathways (http://www.wikipathways.org), a public wiki for pathway curation, since it was first published in 2008. New features are discussed, as well as developments in the community of contributors. New features include a zoomable pathway viewer, support for pathway ontology annotations, the ability to mark pathways as private for a limited time and the availability of stable hyperlinks to pathways and the elements therein. WikiPathways content is freely available in a variety of formats such as the BioPAX standard, and the content is increasingly adopted by external databases and tools, including Wikipedia. A recent development is the use of WikiPathways as a staging ground for centrally curated databases such as Reactome. WikiPathways is seeing steady growth in the number of users, page views and edits for each pathway. To assess whether the community curation experiment can be considered successful, we analyze the relation between use and contribution, which gives results in line with other wiki projects. The novel use of pathway pages as supplementary material to publications, as well as the addition of tailored content for research domains, is expected to stimulate growth further.

    BridgeDb: standardized access to gene, protein and metabolite identifier mapping services

    Many interesting problems in bioinformatics require integration of data from various sources, for example when combining microarray data with a pathway database, or when merging co-citation networks with protein-protein interaction networks. Invariably this leads to an identifier mapping problem, where different datasets are annotated with identifiers that are related, but originate from different databases.

Solutions for the identifier mapping problem exist, such as BioMart, Synergizer, Cronos, PICR, HMS and many more. This creates an opportunity for bioinformatics tool developers: tools can be made to flexibly support multiple mapping services, or mapping services can be combined to obtain broader coverage. This approach requires an interface layer between tools and mapping services. BridgeDb provides such an interface layer, in the form of both a Java and a REST API.
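
As an illustration of the REST side, any tool that can issue an HTTP request can use the interface layer. The following sketch assumes the endpoint pattern from the BridgeDb documentation ({organism}/xrefs/{systemCode}/{identifier}, where system code "L" denotes Entrez Gene); details may differ between service versions.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;

    public class RestMappingExample {
        public static void main(String[] args) throws Exception {
            // Request all cross-references for Entrez Gene ("L")
            // identifier 1234 in human from the public web service.
            URL url = new URL(
                    "https://webservice.bridgedb.org/Human/xrefs/L/1234");
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(url.openStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    // Each line: mapped identifier <TAB> data source name.
                    System.out.println(line);
                }
            }
        }
    }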

Because of the standardized interface layer, BridgeDb is not tied to a specific source of mapping information. You can switch easily between flat files, relational databases and several different web services. Mapping services can be combined to support multi-omics experiments or to integrate custom microarray annotations. BridgeDb is not just yet another mapping service: it builds on existing work and integrates multiple partial solutions. The framework is intended for customization and adaptation to any identifier mapping service.
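
On the Java side, switching backends comes down to changing the connect string. Below is a minimal sketch based on the documented driver/connect-string pattern; exact driver class names and connect strings may vary between BridgeDb versions.

    import java.util.Set;
    import org.bridgedb.BridgeDb;
    import org.bridgedb.DataSource;
    import org.bridgedb.IDMapper;
    import org.bridgedb.Xref;

    public class MappingExample {
        public static void main(String[] args) throws Exception {
            // Load a driver; the connect string selects the backend, so
            // moving from the web service to a local relational database
            // (e.g. "idmapper-pgdb:Hs_Derby.bridge") is a one-line change.
            Class.forName("org.bridgedb.webservice.bridgerest.BridgeRest");
            IDMapper mapper = BridgeDb.connect(
                    "idmapper-bridgerest:http://webservice.bridgedb.org/Human");

            // Map an Entrez Gene identifier ("L") to Ensembl ("En").
            Xref src = new Xref("1234", DataSource.getBySystemCode("L"));
            Set<Xref> hits = mapper.mapID(src, DataSource.getBySystemCode("En"));
            for (Xref x : hits) {
                System.out.println(x);
            }
        }
    }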

BridgeDb makes it easy to add an important capability to existing tools. It has already been integrated into several popular bioinformatics applications, such as Cytoscape, WikiPathways, PathVisio, Vanted and Taverna. To encourage tool developers to start using BridgeDb, we have created code examples, online documentation, and a mailing list for questions.

We believe that, to meet the challenges encountered in bioinformatics today, the software development process should follow a few essential principles: user-friendliness, code reuse, modularity and open source. BridgeDb adheres to these principles and can serve as a useful model for others to follow. BridgeDb helps increase the user-friendliness of graphical applications. It reuses work from other projects such as BioMart and MIRIAM. BridgeDb consists of several small modules, integrated through a common interface (API). Components of BridgeDb can be left out or replaced for maximum flexibility. BridgeDb has been open source from the very beginning of the project. The philosophy of open source is closely aligned with academic values of building on the work of giants.

The BridgeDb library is available at "http://www.bridgedb.org":http://www.bridgedb.org.
A paper about BridgeDb was published in _BMC Bioinformatics_, 2010 Jan 4;11(1):5.

BridgeDb blog: "http://www.helixsoft.nl/blog/?tag=bridgedb":http://www.helixsoft.nl/blog/?tag=bridgedb

    Pharmacokinetics of vibunazole (Bay n7133) administered orally to healthy subjects

    Evaluation of robustly optimised intensity modulated proton therapy for nasopharyngeal carcinoma

    BACKGROUND AND PURPOSE: To evaluate the dosimetric changes occurring over the treatment course for nasopharyngeal carcinoma (NPC) patients treated with robustly optimised intensity modulated proton therapy (IMPT). MATERIALS AND METHODS: Twenty-five NPC patients were treated to two dose levels (CTV1: 70 Gy, CTV2: 54.25 Gy) with robustly optimised IMPT plans. Robustness evaluation was performed over 28 error scenarios, using voxel-wise minimum distributions to assess target coverage and voxel-wise maximum distributions to assess possible hotspots and critical organ doses. Daily CBCT was used for positioning and weekly repeat CTs (rCT) were taken, on which the plan dose was recalculated and robustly evaluated. Deformable image registration was used to warp and accumulate the nominal, voxel-wise minimum and maximum rCT dose distributions. Changes to target coverage, critical organ and normal tissue dose between the accumulated and planned doses were investigated. RESULTS: Two patients required a plan adaptation due to reduced target coverage. The D98% in the accumulated voxel-wise minimum distribution was higher than planned for CTV1 in 24/25 patients and for CTV2 in 20/25 patients. Maximum doses to the critical organs remained acceptable in all patients. Other normal tissue doses showed some variation as a result of soft tissue deformations and weight change. Normal tissue complication probabilities for grade ≥2 dysphagia and grade ≥2 xerostomia remained similar to planned values. CONCLUSION: Robustly optimised IMPT plans, in combination with volumetric verification imaging and adaptive planning, provided robust target coverage and acceptable OAR dose variation in our NPC cohort when accumulated over longitudinal data.

    Stereotactic large-core needle breast biopsy: analysis of pain and discomfort related to the biopsy procedure

    The purpose of this study was to determine the significance of variables such as duration of the procedure, type of breast tissue, number of passes, depth of the biopsies, underlying pathology, and the operator performing the procedure, and their effect on women’s perception of pain and discomfort during stereotactic large-core needle breast biopsy. One hundred and fifty consecutive patients with non-palpable suspicious mammographic lesions were included. Between three and nine 14-gauge biopsy passes were taken using a prone stereotactic table. Following the biopsy procedure, patients were asked to complete a questionnaire. There was no discomfort in lying on the prone table. There was no relation between pain and the type of breast lesion, the underlying pathology, or the performing operator. The type of breast tissue was correlated with pain experienced from the biopsy (P = 0.0001): we found that patients with dense breast tissue reported more pain from the biopsy than patients with more involuted breast tissue. The depth of the biopsy correlated with pain from the biopsy (P = 0.0028); deep lesions were more painful than superficial ones. There was a correlation between the number of passes and pain in the neck (P = 0.0188) and shoulder (P = 0.0366). The duration of the procedure correlated with pain experienced in the neck (P = 0.0116) but not with pain experienced from the biopsy.

    Answering biological questions: querying a systems biology database for nutrigenomics

    The requirement of systems biology for connecting different levels of biological research leads directly to a need for integrating vast amounts of diverse information in general, and of omics data in particular. The nutritional phenotype database addresses this challenge for nutrigenomics. A particularly urgent objective in coping with the data avalanche is making biologically meaningful information accessible to the researcher. This contribution describes how we intend to meet this objective with the nutritional phenotype database. We outline relevant parts of the system architecture, describe the kinds of data managed by it, and show how the system can support retrieval of biologically meaningful information by means of ontologies, full-text queries, and structured queries. Our contribution points out critical issues, describes several technical hurdles, and demonstrates how pathway analysis can improve queries and comparisons for nutrition studies. Finally, three directions for future research are given.

    Process‐informed subsampling improves subseasonal rainfall forecasts in Central America

    Subseasonal rainfall forecast skill is critical to support preparedness for hydrometeorological extremes. We assess how a process-informed evaluation, which subsamples forecasting model members based on their ability to represent potential predictors of rainfall, can improve monthly rainfall forecasts for the following month in Central America, using Costa Rica and Guatemala as test cases. We generate a constrained ensemble mean by subsampling 130 members from five dynamic forecasting models in the C3S multimodel ensemble, based on their representation of both (a) zonal wind direction and (b) Pacific and Atlantic sea surface temperatures (SSTs) at the time of initialization. Our results show, in multiple months and locations, increases in mean squared error skill of 0.4 and improved detection rates of rainfall extremes. This method is transferable to other regions driven by slowly-changing processes. Process-informed subsampling is successful because it identifies members that fail to represent the entire rainfall distribution when wind/SST error increases.